Bundle Adjustment in the Eager Mode
Zitong Zhan, Huan Xu, Zihang Fang, Xinpeng Wei, Yaoyu Hu, Chen Wang
Bundle adjustment (BA) is a critical technique in various robotic applications, such as simultaneous localization and mapping (SLAM), augmented reality (AR), and photogrammetry. BA optimizes parameters such as camera poses and 3D landmarks to align them with observations. With the growing importance of deep learning in perception systems, there is an increasing need to integrate BA with deep learning frameworks for enhanced reliability and performance. However, widely-used C++-based BA frameworks, such as GTSAM, g$^2$o, and Ceres, lack native integration with modern deep learning libraries like PyTorch. This limitation affects their flexibility, adaptability, ease of debugging, and overall implementation efficiency. To address this gap, we introduce an eager-mode BA framework seamlessly integrated with PyPose, providing PyTorch-compatible interfaces with high efficiency. Our approach includes GPU-accelerated, differentiable, and sparse operations designed for 2nd-order optimization, Lie group and Lie algebra operations, and linear solvers. Our eager-mode BA on GPU demonstrates substantial runtime efficiency, achieving an average speedup of 18.5$\times$, 22$\times$, and 23$\times$ compared to GTSAM, g$^2$o, and Ceres, respectively.
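The second-order least-squares iteration at the heart of BA can be sketched independently of PyPose's GPU operators. The toy problem, function names, and NumPy implementation below are illustrative assumptions, not the paper's code: a generic Levenberg-Marquardt loop fitting a 2D translation to noisy-free observations, standing in for the pose-and-landmark parameters a real BA solver would optimize.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of residual function f at x."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - fx) / eps
    return J

def levenberg_marquardt(f, x0, iters=50, lam=1e-3):
    """Minimize ||f(x)||^2 with a damped Gauss-Newton (LM) loop."""
    x = x0.astype(float)
    for _ in range(iters):
        r = f(x)
        J = numerical_jacobian(f, x)
        H = J.T @ J + lam * np.eye(x.size)      # damped normal equations
        dx = np.linalg.solve(H, -J.T @ r)
        if np.linalg.norm(f(x + dx)) < np.linalg.norm(r):
            x = x + dx                           # accept step, trust more
            lam *= 0.5
        else:
            lam *= 2.0                           # reject step, damp more
    return x

# Toy "bundle adjustment": recover a 2D translation t aligning
# landmarks with their observations (a stand-in for camera poses).
landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
t_true = np.array([0.3, -0.2])
obs = landmarks + t_true

def residuals(t):
    return ((landmarks + t) - obs).ravel()

t_est = levenberg_marquardt(residuals, np.zeros(2))
```

A real BA problem would use sparse block Jacobians and Lie-group pose parameterizations, which is exactly the machinery the paper implements as differentiable PyTorch operations.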
How to Implement YoloV3 in Tensorflow 2.0
This repo provides a clean implementation of YoloV3 in TensorFlow 2.0 using all the best practices. I have created a complete tutorial on how to train from scratch using the VOC2012 dataset. For customized training, you need to generate tfrecords following the TensorFlow Object Detection API. For example, you can use Microsoft VoTT to generate such a dataset. You can also use this script to create the Pascal VOC dataset.
How Nvidia's CUDA Monopoly In Machine Learning Is Breaking - OpenAI Triton And PyTorch 2.0
Over the last decade, the landscape of machine learning software development has undergone significant changes. Many frameworks have come and gone, but most have relied heavily on leveraging Nvidia's CUDA and performed best on Nvidia GPUs. However, with the arrival of PyTorch 2.0 and OpenAI's Triton, Nvidia's dominant position in this field, mainly due to its software moat, is being disrupted. This report will touch on topics such as why Google's TensorFlow lost out to PyTorch, why Google hasn't been able to capitalize publicly on its early leadership of AI, the major components of machine learning model training time, the memory capacity/bandwidth/cost wall, model optimization, why other AI hardware companies haven't been able to make a dent in Nvidia's dominance so far, why hardware will start to matter more, how Nvidia's competitive advantage in CUDA is wiped away, and a major win one of Nvidia's competitors has at a large cloud for training silicon. The 1,000-foot summary is that the default software stack for machine learning models will no longer be Nvidia's closed-source CUDA. The ball was in Nvidia's court, and they let OpenAI and Meta take control of the software stack. That ecosystem built its own tools because of Nvidia's failure with their proprietary tools, and now Nvidia's moat will be permanently weakened.
Improve the Performance Easily in TensorFlow Using Graph Mode
Originally published on Towards AI, the world's leading AI and technology news and media company. Originally, TensorFlow only allowed you to code in Graph Mode, but since the ability to code in Eager Mode was introduced, most notebooks produced are in Eager Mode.
GitHub - zzh8829/yolov3-tf2: YoloV3 Implemented in Tensorflow 2.0
Deep Reinforcement Learning with TensorFlow 2.1 - Roman Ring
In this tutorial, I will give an overview of the TensorFlow 2.x features through the lens of deep reinforcement learning (DRL) by implementing an advantage actor-critic (A2C) agent, solving the classic CartPole-v0 environment. While the goal is to showcase TensorFlow 2.x, I will do my best to make DRL approachable as well, including a bird's-eye overview of the field. In fact, since the main focus of the 2.x release is making life easier for developers, it's a great time to get into DRL with TensorFlow. For example, the source code for this blog post is under 150 lines, including comments! Code is available on GitHub here and as a notebook on Google Colab here.
#003 TF 2.0 Eager Execution - A Pythonic Way of Using TensorFlow (Master Data Science, 24.12.2018)
TensorFlow uses eager execution, which is a more convenient and more "Pythonic" way to execute code. It is the default in the latest version, TensorFlow 2.0. In TensorFlow 1.x, we first write a Python program that constructs a graph for our computation; the program then invokes Session.run(), which hands the graph off for execution to the C++ runtime. This style is called declarative programming: the specification of the computation is separated from its execution. Sessions thus provide one way to execute these compositions.
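The build-then-run split can be illustrated without TensorFlow itself. The toy `Node` class below is an illustrative analogy, not TensorFlow's API: `run()` plays the role of `Session.run()`, walking a pre-built graph, while the "eager" line computes immediately.

```python
# A toy "graph mode": describe the computation first, execute later.
# This mimics TF1's declarative Session.run() style; it is NOT TensorFlow.
class Node:
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def run(self):
        # Like Session.run(): recursively evaluate the graph only now.
        return self.fn(*(n.run() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

# Declarative: building `graph` performs no arithmetic yet.
graph = add(constant(2), constant(3))
result = graph.run()      # computation happens here

# "Eager" style: the computation happens immediately, line by line.
eager_result = 2 + 3
```

Both produce the same value; the difference is *when* the arithmetic runs, which is exactly what makes eager code easier to step through and debug.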
Deep Reinforcement Learning with TensorFlow 2.0
In this tutorial, I will showcase the upcoming TensorFlow 2.0 features through the lens of deep reinforcement learning (DRL) by implementing an advantage actor-critic (A2C) agent to solve the classic CartPole-v0 environment. While the goal is to showcase TensorFlow 2.0, I will do my best to make the DRL aspect approachable as well, including a brief overview of the field. In fact, since the main focus of the 2.0 release is making developers' lives easier, it's a great time to get into DRL with TensorFlow -- our full agent source is under 150 lines! The code is available as a notebook here and online on Google Colab here. As TensorFlow 2.0 is still in an experimental stage, I recommend installing it in a separate (virtual) environment.
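The core quantity an A2C agent like the one in these tutorials computes is the advantage: discounted returns minus the critic's value estimates. The sketch below shows that calculation in plain NumPy under stated assumptions (hypothetical reward sequence and critic values; the tutorials' actual agents compute this from CartPole rollouts with a neural-network critic).

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99, bootstrap=0.0):
    """Backward recursion G_t = r_t + gamma * G_{t+1},
    seeded with a bootstrap value for the state after the rollout."""
    G = bootstrap
    out = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G
        out[t] = G
    return out

# Hypothetical 3-step rollout with gamma=0.5 for easy hand-checking:
# G_2 = 1, G_1 = 1 + 0.5*1 = 1.5, G_0 = 1 + 0.5*1.5 = 1.75
rewards = [1.0, 1.0, 1.0]
returns = discounted_returns(rewards, gamma=0.5)

values = np.array([1.0, 1.0, 1.0])   # hypothetical critic estimates
advantages = returns - values        # drives the actor's policy-gradient loss
```

In A2C, the actor is updated with the policy gradient weighted by these advantages, while the critic regresses `values` toward `returns`.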
What's what in TensorFlow 2.0
I think everyone can agree the new TensorFlow 2.0 is a revolution rather than an evolution. It has greatly simplified almost every aspect of the clunky TF1. And while the TensorFlow developers eased the transition to the new framework by creating the TF2 upgrade script, they have undeniably complicated things a bit for newcomers. We now live in a world of billions of code samples, StackOverflow snippets, and pieces of information that, at least in the beginning, are hard to navigate. You never know whether code is TensorFlow 1, TensorFlow 2, or in-between; to make things worse, there was an in-between phase too.
PyTorch and TensorFlow: Which ML Framework is More Popular in Academia and Industry
Horace He recently published an article summarising The State of Machine Learning Frameworks in 2019. The article utilizes several metrics to argue the point that PyTorch is quickly becoming the dominant framework for research, whereas TensorFlow is the dominant framework for applications deployed within a commercial/industrial context. He, a research student at Cornell University, counted the number of papers discussing either PyTorch or TensorFlow that were presented at a series of well-known machine-learning-oriented conferences, namely ECCV, NIPS, ACL, NAACL, ICML, CVPR, ICLR, ICCV, and EMNLP. In summary, the majority of papers were implemented in PyTorch for every major conference in 2019. PyTorch outnumbered TensorFlow by 2:1 in vision-related conferences and 3:1 in language-related conferences.